
    Quantifying Differential Privacy under Temporal Correlations

    Differential Privacy (DP) has received increased attention as a rigorous privacy framework. Existing studies employ traditional DP mechanisms (e.g., the Laplace mechanism) as primitives, which assume that the data are independent or that adversaries have no knowledge of the data correlations. However, continuously generated data in the real world tend to be temporally correlated, and such correlations can be acquired by adversaries. In this paper, we investigate the potential privacy loss of a traditional DP mechanism under temporal correlations in the context of continuous data release. First, we model the temporal correlations with a Markov model and analyze the privacy leakage of a DP mechanism when adversaries have knowledge of such correlations. Our analysis reveals that the privacy leakage of a DP mechanism may accumulate and increase over time; we call this temporal privacy leakage. Second, to measure such privacy leakage, we design an efficient algorithm that calculates it in polynomial time. Although temporal privacy leakage may increase over time, we also show that its supremum may exist in some cases. Third, to bound the privacy loss, we propose mechanisms that convert any existing DP mechanism into one that protects against temporal privacy leakage. Experiments with synthetic data confirm that our approach is efficient and effective.
    Comment: appears at ICDE 2017
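    To fix notation, the following is a minimal sketch of the quantities involved (the notation is illustrative and may differ from the paper's exact definitions): a mechanism gives event-level privacy if each single release leaks at most ε about the value at that time point, while temporal privacy leakage measures what the whole release sequence reveals once the adversary knows the Markov transitions.

```latex
% Event-level epsilon-DP at time t: neighboring values l_t ~ l_t' are
% nearly indistinguishable from the single release o_t alone.
\[
\sup_{l_t \sim l_t',\; o_t}
\log \frac{\Pr(o_t \mid l_t)}{\Pr(o_t \mid l_t')} \;\le\; \epsilon .
\]
% Temporal privacy leakage: the adversary combines all releases, with the
% other values marginalized using the known Markov transition probabilities.
\[
\mathrm{TPL}(t) \;=\;
\sup_{l_t \sim l_t',\; o_1,\dots,o_T}
\log \frac{\Pr(o_1,\dots,o_T \mid l_t)}{\Pr(o_1,\dots,o_T \mid l_t')} .
\]
% TPL(t) can exceed epsilon and may grow as more releases accumulate; for
% some transition matrices it converges to a finite supremum.
```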

    Quantifying Differential Privacy in Continuous Data Release under Temporal Correlations

    Differential Privacy (DP) has received increasing attention as a rigorous privacy framework. Many existing studies employ traditional DP mechanisms (e.g., the Laplace mechanism) as primitives to continuously release private data, protecting privacy at each time point (i.e., event-level privacy); they assume that the data at different time points are independent, or that adversaries have no knowledge of the correlations between them. However, continuously generated data tend to be temporally correlated, and such correlations can be acquired by adversaries. In this paper, we investigate the potential privacy loss of a traditional DP mechanism under temporal correlations. First, we analyze the privacy leakage of a DP mechanism under temporal correlations that can be modeled by a Markov chain. Our analysis reveals that the event-level privacy loss of a DP mechanism may increase over time. We call this unexpected privacy loss temporal privacy leakage (TPL). Although TPL may increase over time, we find that its supremum may exist in some cases. Second, we design efficient algorithms for calculating TPL. Third, we propose data release mechanisms that convert any existing DP mechanism into one that protects against TPL. Experiments confirm that our approach is efficient and effective.
    Comment: accepted in the TKDE special issue "Best of ICDE 2017". arXiv admin note: substantial text overlap with arXiv:1610.0754
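    For intuition, here is a brute-force numerical sketch of the phenomenon (a toy setup of my own, not the paper's polynomial-time algorithm): a binary value is released at each time step through randomized response, which satisfies ε-DP for each single release, yet an adversary who knows the two-state Markov chain can combine all releases and learn more about the value at time t than ε suggests. All constants and helper names are hypothetical.

```python
import itertools
import math

import numpy as np

# Toy setup (hypothetical, for illustration only): binary states x_1..x_T
# follow a two-state Markov chain known to the adversary; each x_t is
# released via randomized response, which is EPS-DP for a single release.
EPS = 1.0                        # per-release privacy budget
P = np.array([[0.9, 0.1],        # P[i, j] = Pr(x_{t+1} = j | x_t = i)
              [0.1, 0.9]])
PRIOR = np.array([0.5, 0.5])     # prior over x_1
T = 5                            # number of releases
KEEP = math.exp(EPS) / (1.0 + math.exp(EPS))  # Pr(output equals true state)


def p_output_given_state(o, x):
    """Randomized-response likelihood of one released bit."""
    return KEEP if o == x else 1.0 - KEEP


def p_outputs_given_xt(outputs, t, value):
    """Pr(o_1..o_T | x_t = value), marginalizing all other states with the
    Markov chain (brute force over every state sequence)."""
    total, norm = 0.0, 0.0
    for states in itertools.product((0, 1), repeat=T):
        if states[t] != value:
            continue
        p_states = PRIOR[states[0]]
        for a, b in zip(states, states[1:]):
            p_states *= P[a, b]
        norm += p_states                     # accumulates Pr(x_t = value)
        p_obs = 1.0
        for o, x in zip(outputs, states):
            p_obs *= p_output_given_state(o, x)
        total += p_states * p_obs
    return total / norm


def temporal_privacy_leakage(t):
    """Worst-case log-ratio over all output sequences and neighboring values."""
    worst = 0.0
    for outputs in itertools.product((0, 1), repeat=T):
        p0 = p_outputs_given_xt(outputs, t, 0)
        p1 = p_outputs_given_xt(outputs, t, 1)
        worst = max(worst, abs(math.log(p0 / p1)))
    return worst


for t in range(T):
    print(f"t={t}: per-release eps={EPS:.2f}, "
          f"leakage with correlations={temporal_privacy_leakage(t):.2f}")
```

    With these strongly correlated transitions, the computed leakage exceeds the per-release ε, which is the accumulation effect the paper quantifies and then bounds.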

    Turing instability in a diffusive predator-prey model with multiple Allee effect and herd behavior

    Diffusion-driven instability and bifurcations are studied in a predator-prey model with herd behavior and quadratic mortality, incorporating a multiple Allee effect into the prey species. The existence and stability of the equilibria of the system are studied, and the bifurcation behaviors of the system without diffusion are shown. Sufficient and necessary conditions for the occurrence of Turing instability are obtained, and the stability and direction of the Hopf and steady-state bifurcations are determined using the normal form method. Furthermore, numerical simulations are presented to support the theoretical analysis. We find that a sufficiently large prey diffusion rate prevents Turing instability from emerging. Finally, we summarize our findings in the conclusion.
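    The analysis follows the classical Turing framework; the sketch below uses generic reaction terms (the specific f and g are placeholders, not the authors' exact model).

```latex
% Reaction-diffusion system for prey u and predator v; f contains the
% multiple Allee effect and the sqrt(u) v herd-type functional response,
% g contains quadratic predator mortality (placeholders, for illustration).
\[
\partial_t u = d_1 \Delta u + f(u,v), \qquad
\partial_t v = d_2 \Delta v + g(u,v).
\]
% Linearizing at a positive equilibrium with Jacobian
% J = (f_u, f_v; g_u, g_v), diffusion-driven (Turing) instability requires
\[
\operatorname{tr} J < 0, \qquad \det J > 0, \qquad
d_2 f_u + d_1 g_v > 2\sqrt{d_1 d_2 \det J} > 0,
\]
% i.e., the homogeneous steady state is stable without diffusion, but
% det J - (d_2 f_u + d_1 g_v) k^2 + d_1 d_2 k^4 < 0 for some wavenumber k.
```

    If the prey acts as the activator (f_u > 0 at the equilibrium), increasing the prey diffusion rate d_1 makes the last inequality harder to satisfy, which is consistent with the observation that a too-large prey diffusion rate prevents Turing instability.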

    Taming Android Fragmentation through Lightweight Crowdsourced Testing

    Android fragmentation refers to the overwhelming diversity of Android devices and OS versions, which makes it impossible to test an app on every supported device, leaving compatibility bugs scattered across the community and resulting in poor user experiences. To mitigate this, researchers have designed various approaches to automatically detect such compatibility issues. However, the current state-of-the-art tools can only detect specific kinds of compatibility issues (namely, those caused by API signature evolution), leaving many other essential types unrevealed. For example, customized OS versions on real devices and semantic changes in the OS can lead to serious compatibility issues that are non-trivial to detect statically. To this end, we propose a novel, lightweight, crowdsourced testing approach, LAZYCOW, to fill this research gap and make it possible to tame Android fragmentation through crowdsourced effort. Crowdsourced testing is an emerging alternative to conventional mobile testing mechanisms that allows developers to test their products on real devices and pinpoint platform-specific issues. Experimental results on thousands of test cases on real-world Android devices show that LAZYCOW is effective in automatically identifying and verifying API-induced compatibility issues. Moreover, a qualitative investigation of the user experience shows strong evidence that LAZYCOW is useful and welcomed in practice.
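    To illustrate the crowdsourced-verification idea in the abstract, here is a hypothetical sketch (not LAZYCOW's actual data model or logic): test probes run on many crowd devices, and an API is flagged as a potential compatibility issue when its observed behavior diverges across device/OS combinations.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Report:
    """One crowd tester's result for one API probe (illustrative fields)."""
    api: str          # e.g. "android.app.Notification$Builder#setColor"
    device: str       # device model reported by the crowd tester
    os_version: str   # Android version or custom ROM build
    passed: bool      # did the probe behave as documented on this device?


def flag_compatibility_issues(reports, min_combinations=3):
    """Group crowd reports per API and flag APIs whose behavior diverges
    across device/OS combinations, given enough independent evidence."""
    by_api = defaultdict(list)
    for r in reports:
        by_api[r.api].append(r)

    issues = {}
    for api, rs in by_api.items():
        combos = {(r.device, r.os_version) for r in rs}
        if len(combos) < min_combinations:
            continue  # not enough independent evidence yet
        failing = {(r.device, r.os_version) for r in rs if not r.passed}
        passing = {(r.device, r.os_version) for r in rs if r.passed}
        if failing and passing:        # divergent behavior across devices
            issues[api] = sorted(failing)
    return issues
```

    A real system would additionally need to handle flaky results and attribute failures to vendor customizations versus platform-level API changes.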

    Improved Federated Learning for Handling Long-tail Words

    Automatic speech recognition (ASR) machine learning models are deployed on client devices that include speech interfaces. ASR models can benefit from continuous learning and adaptation to large-scale changes, e.g., as new words are added to the vocabulary. While federated learning can be utilized to enable continuous learning for ASR models in a privacy-preserving manner, the trained model can perform poorly on rarely occurring, long-tail words if the distribution of the training data is skewed and does not adequately represent those words. This disclosure describes federated learning techniques to improve ASR model quality on long-tail words given an imbalanced data distribution. Two approaches are described: probabilistic sampling and client loss weighting. In probabilistic sampling, federated clients that include fewer long-tail words are less likely to be selected during training. In client loss weighting, incorrect predictions on long-tail words are penalized more heavily than those on other words.
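    A minimal sketch of the two ideas follows; the names long_tail_fraction and LONG_TAIL_WEIGHT are hypothetical, and this is not the disclosed system's implementation.

```python
import numpy as np

LONG_TAIL_WEIGHT = 4.0   # assumed penalty multiplier for long-tail errors


def sample_clients(clients, num_selected, long_tail_fraction):
    """Probabilistic sampling: a client's selection probability grows with
    its share of long-tail words, so clients with few such words are less
    likely to be chosen in a training round."""
    weights = np.array([1.0 + long_tail_fraction(c) for c in clients])
    probs = weights / weights.sum()
    idx = np.random.choice(len(clients), size=num_selected,
                           replace=False, p=probs)
    return [clients[i] for i in idx]


def client_loss(per_word_losses, is_long_tail):
    """Client loss weighting: losses on long-tail words are up-weighted so
    that incorrect predictions on them are penalized more heavily."""
    per_word_losses = np.asarray(per_word_losses, dtype=float)
    weights = np.where(np.asarray(is_long_tail), LONG_TAIL_WEIGHT, 1.0)
    # Normalize by the total weight so the loss scale stays comparable
    # across clients with different amounts of long-tail data.
    return float(np.sum(weights * per_word_losses) / np.sum(weights))
```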